Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization

Authors

Abstract

Meta-learning automatically infers an inductive bias by observing data from a number of related tasks. The inductive bias is encoded by hyperparameters that determine aspects of the model class or training algorithm, such as the initialization or the learning rate. Meta-learning assumes that the tasks belong to a task environment, and that tasks are drawn from the same environment both during meta-training and meta-testing. This, however, may not hold true in practice. In this paper, we introduce the problem of transfer meta-learning, in which the target task environment at meta-testing may differ from the source task environment observed during meta-training. Novel information-theoretic upper bounds are obtained on the meta-generalization gap, which measures the difference between the meta-training loss, available at the meta-learner, and the average loss on meta-test data from a new, randomly selected, task in the target task environment. The first bound captures the meta-environment shift between source and target task environments via the KL divergence between source and target data distributions. The second, PAC-Bayesian, bound and the third, single-draw, bound account for this shift via the log-likelihood ratio between source and target task distributions. Furthermore, two transfer meta-learning solutions are introduced. For the first, termed Empirical Meta-Risk Minimization (EMRM), we derive bounds on the average optimality gap. The second, referred to as Information Meta-Risk Minimization (IMRM), is obtained by minimizing the PAC-Bayesian bound. IMRM is shown via experiments to potentially outperform EMRM.
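The abstract names two key quantities: a KL divergence that measures the shift between the source and target task environments, and an empirical meta-risk that EMRM minimizes over hyperparameters. The Python sketch below illustrates both on a toy problem. It is a minimal sketch under assumed Gaussian task environments; the names kl_gaussian and meta_risk and the one-shot base learner (y + u) / 2 are hypothetical and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for 1-D Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Toy task environment: a task is a mean theta ~ N(m_env, s_env^2), and
# per-task data are y ~ N(theta, 1). The hyperparameter u is a bias used
# by a hypothetical one-shot base learner theta_hat = (y + u) / 2.
def meta_risk(u, m_env, s_env, n_tasks=2000):
    thetas = rng.normal(m_env, s_env, size=n_tasks)
    ys = rng.normal(thetas, 1.0)
    preds = (ys + u) / 2.0
    return np.mean((preds - thetas) ** 2)

m_src, s_src = 0.0, 1.0   # source task environment
m_tgt, s_tgt = 1.5, 1.0   # shifted target task environment

# EMRM-style selection: pick u minimizing the empirical meta-risk on source tasks.
grid = np.linspace(-3, 3, 61)
u_emrm = grid[np.argmin([meta_risk(u, m_src, s_src) for u in grid])]

print(f"EMRM hyperparameter (fit on source): u = {u_emrm:.2f}")
print(f"meta-test risk on target environment: {meta_risk(u_emrm, m_tgt, s_tgt):.3f}")
print(f"environment shift KL(src||tgt) = {kl_gaussian(m_src, s_src**2, m_tgt, s_tgt**2):.3f}")
```

On this toy problem EMRM fits u to the source environment, so its meta-test risk on the shifted target environment exceeds the target-optimal risk; this is the kind of gap the paper's transfer meta-generalization bounds control.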


Similar Resources

Information and Meta Information

A model of information is presented, in which statements such as "the information sets are common knowledge" may be formally stated and proved. The model can also be extended to include the statement "this model is common knowledge" in a well-defined manner, using the fact that when an event A is common knowledge, it is common knowledge that A is common knowledge. Finally, the model may also ...


Information Theoretic Learning

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy INFORMATION THEORETIC LEARNING: RENYI'S ENTROPY AND ITS APPLICATIONS TO ADAPTIVE SYSTEM TRAINING By Deniz Erdogmus May 2002 Chairman: Dr. Jose C. Principe Major Department: Electrical and Computer Engineering Traditionally, second-order ...
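This dissertation teaser centers on Renyi's entropy. For reference, a minimal sketch of the definition follows, with alpha = 2 being the quadratic entropy commonly used in information-theoretic learning; renyi_entropy is a hypothetical helper written for this illustration, not code from the dissertation.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha) for alpha > 0.
    As alpha -> 1 this recovers the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()  # normalize, tolerating unnormalized input
    if np.isclose(alpha, 1.0):
        nz = p[p > 0]
        return -np.sum(nz * np.log(nz))  # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

dist = [0.5, 0.25, 0.125, 0.125]
for a in (0.5, 1.0, 2.0):  # alpha = 2 gives the quadratic Renyi entropy
    print(f"H_{a}(p) = {renyi_entropy(dist, a):.4f} nats")
```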


Forced Information for Information-Theoretic Competitive Learning

We have proposed a new information-theoretic approach to competitive learning [1], [2], [3], [4], [5]. The information-theoretic method is a very flexible type of competitive learning, compared with conventional competitive learning. However, some problems have been pointed out concerning the information-theoretic method, for example, slow convergence. In this paper, we propose a new computatio...


Improved Upper Bounds on Information-theoretic Private Information Retrieval

Private Information Retrieval (PIR) schemes allow a user to retrieve the i-th bit of an n-bit database x, replicated in k servers, while keeping the value of i private from each server. A t-private PIR scheme protects the user's privacy from any collusion of up to t servers. The main cost measure for such schemes is their communication complexity. We introduce a new technique for the constructi...
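The constructions this abstract alludes to are beyond a short snippet, but the baseline 2-server scheme makes the privacy model concrete: the user hides the index i inside a random subset, so each server's query is uniformly distributed regardless of i. The sketch below is that classic linear-communication baseline, not the improved constructions the paper introduces; pir_2server is a name invented here.

```python
import secrets

def pir_2server(x, i):
    """Toy 1-private, 2-server PIR for an n-bit database x (list of 0/1 ints).
    The user sends each server a subset of indices; each server answers with
    the XOR of the selected bits. Neither server alone learns anything about i."""
    n = len(x)
    # Query for server 1: a uniformly random subset of {0, ..., n-1}.
    s1 = {j for j in range(n) if secrets.randbits(1)}
    # Query for server 2: the same set with index i toggled; also uniform.
    s2 = s1 ^ {i}
    # Each server XORs the bits its query selects (it sees only its own set).
    a1 = 0
    for j in s1:
        a1 ^= x[j]
    a2 = 0
    for j in s2:
        a2 ^= x[j]
    # Every bit except x[i] appears in both answers and cancels under XOR.
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(pir_2server(db, i) == db[i] for i in range(len(db)))
print("recovered every bit correctly")
```

The communication here is linear in n; reducing it below that is exactly the cost measure the abstract says the paper's new technique improves.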


Information-Theoretic Bounds on Target Recognition Performance

This paper derives bounds on the performance of statistical object recognition systems, wherein an image of a target is observed by a remote sensor. Detection and recognition problems are modeled as composite hypothesis testing problems involving nuisance parameters. We develop information-theoretic performance bounds on target recognition based on statistical models for sensors and data, and e...
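A generic way to handle a nuisance parameter in composite hypothesis testing is the generalized likelihood ratio test (GLRT), which maximizes the nuisance out under each hypothesis. The sketch below does this for a known-mean Gaussian detection problem with unknown noise variance; it is a textbook illustration under assumed Gaussian models, not the sensor models or bounds of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def glrt_statistic(y, mu):
    """Log generalized likelihood ratio for H1: y ~ N(mu, s^2) vs H0: y ~ N(0, s^2),
    with the noise variance s^2 an unknown nuisance parameter that is maximized
    out under each hypothesis (the Gaussian ML estimates are in closed form)."""
    n = len(y)
    var0 = np.mean(y ** 2)            # ML variance estimate under H0
    var1 = np.mean((y - mu) ** 2)     # ML variance estimate under H1
    return 0.5 * n * np.log(var0 / var1)  # compare to a threshold to decide

mu, sigma, n = 1.0, 2.0, 50
y_h1 = rng.normal(mu, sigma, n)   # target present
y_h0 = rng.normal(0.0, sigma, n)  # target absent
print("GLR with target:   ", glrt_statistic(y_h1, mu))
print("GLR without target:", glrt_statistic(y_h0, mu))
```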

Journal

Journal title: IEEE Transactions on Information Theory

Year: 2022

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2021.3119605